Long-term autonomy of robotic systems implicitly requires dependable platforms that are able to naturally handle hardware and software faults, problematic behaviors, or lack of knowledge. Model-based dependable platforms additionally require the application of rigorous methods during system development, including the use of correct-by-construction techniques to implement robot behaviors. As the level of robot autonomy increases, so does the cost of providing system dependability. We argue that the dependability of autonomous robots can benefit from formal models of several cognitive functions: knowledge processing, reasoning, and meta-evaluation. Here we make the case for a generative model of cognitive architectures for autonomous robotic agents that subscribes to the principles of model-based engineering and dependability, autonomic computing, and knowledge-enabled robotics.
How can we fairly evaluate two multi-object tracking algorithms (i.e. trackers) when each one employs a different object detector? Detectors keep improving, and thus trackers need to make less effort to estimate object states over time. Is it then fair to compare a new tracker employing a new detector against another tracker using an old detector? In this paper, we propose a novel performance measure, named Tracking Effort Measure (TEM), to evaluate trackers that use different detectors. TEM estimates the improvement the tracker achieves with respect to its input data (i.e. detections) at the frame level (intra-frame complexity) and the sequence level (inter-frame complexity). We evaluate TEM over well-known datasets, four trackers, and eight detection sets. Results show that, unlike conventional tracking evaluation measures, TEM can quantify the effort made by the tracker with reduced correlation to the input detections. Its implementation is publicly available at https://github.com/vpulab/MOT-evaluation.
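The core idea of separating a tracker's contribution from its detector's quality can be illustrated with a toy ratio. This is an illustrative sketch only, not the paper's TEM definition (which is built on intra- and inter-frame complexity; see the linked repository for the actual implementation):

```python
def effort_ratio(det_errors, trk_errors, eps=1e-9):
    """Toy per-sequence measure: average relative reduction of
    per-frame error from raw detections to tracker output.
    1.0 means the tracker removed all detection error; 0.0 means
    it added nothing beyond what the detector already provided."""
    assert len(det_errors) == len(trk_errors)
    gains = [(d - t) / (d + eps) for d, t in zip(det_errors, trk_errors)]
    return sum(gains) / len(gains)
```

A measure of this shape rewards trackers that improve poor detections more than trackers that merely pass good detections through, which is the fairness property the abstract argues for.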
Reinforcement learning is a machine learning approach based on behavioral psychology. It focuses on learning agents that can acquire knowledge and learn to carry out new tasks by interacting with the environment. However, a problem arises when reinforcement learning is used in critical contexts, where the users of the system need more information on, and confidence in, the actions executed by an agent. In this regard, explainable reinforcement learning seeks to provide an agent in training with methods that explain its behavior, so that users with no machine learning experience can understand it. One such method is memory-based explainable reinforcement learning, which uses an episodic memory to compute a probability of success for each state-action pair. In this work, we propose to use the memory-based explainable reinforcement learning method in a hierarchical environment composed of sub-tasks that must first be addressed in order to solve a more complex task. The end goal is to verify whether the agent can be given the ability to explain its actions in the global task as well as in the sub-tasks. The results show that it is possible to use the memory-based method in hierarchical environments with high-level tasks and to compute probabilities of success that serve as a basis for explaining the agent's behavior.
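The memory-based idea of estimating a probability of success per state-action pair from episode outcomes can be sketched as follows. This is a minimal illustration of the general scheme, not the authors' exact implementation; class and method names are hypothetical:

```python
from collections import defaultdict

class EpisodicSuccessMemory:
    """Tracks, for each (state, action) pair, how often episodes that
    visited the pair ended in success, so that
    P(success | s, a) = successes[s, a] / visits[s, a]."""

    def __init__(self):
        self.visits = defaultdict(int)
        self.successes = defaultdict(int)
        self._episode = []  # (state, action) pairs seen this episode

    def record(self, state, action):
        self._episode.append((state, action))

    def end_episode(self, success: bool):
        # Credit every pair visited during the episode with its outcome.
        for sa in self._episode:
            self.visits[sa] += 1
            if success:
                self.successes[sa] += 1
        self._episode = []

    def p_success(self, state, action) -> float:
        sa = (state, action)
        if self.visits[sa] == 0:
            return 0.0
        return self.successes[sa] / self.visits[sa]
```

In a hierarchical setting, one such memory could be kept per sub-task, letting the agent report a success probability both for its current sub-task action and for the global task.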
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
The interactivist model introduces a dynamic approach to language, communication, and cognition. In this work, we explore this fundamental theory in the context of dialogue modeling for spoken dialogue systems (SDS). To extend this theoretical framework, we propose a set of design principles that adhere to central psycholinguistic and communication theories in order to achieve interactivism in SDS. In doing so, key ideas from these theories form the basis of our proposed design principles.
Within an online learning and optimization framework, this paper proposes and develops a new algorithm for trading wind energy in electricity markets. In particular, we combine a component-wise adaptive variant of the gradient descent algorithm with recent advances in feature-driven newsvendor models. This results in an online offering approach capable of leveraging data-rich environments while adapting to the non-stationary characteristics of energy generation and electricity markets, and with minimal computational burden. The performance of our approach is analyzed on the basis of several numerical experiments, showing both better adaptation to non-stationary uncertain parameters and significant economic gains.
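The basic interplay of a newsvendor-style loss and an online gradient update can be sketched as below. This is a plain online subgradient method under hypothetical penalty values, not the paper's component-wise adaptive variant or its feature-driven model:

```python
def pinball_grad(offer, actual, psi_plus, psi_minus):
    """Subgradient of the newsvendor (pinball) loss w.r.t. the offered
    quantity: surplus is penalised at psi_plus per unit, shortfall at
    psi_minus per unit."""
    return psi_plus if offer > actual else -psi_minus

def online_newsvendor(actuals, psi_plus=1.0, psi_minus=2.0, lr=0.05):
    """Online offering sketch: before each delivery period we commit an
    offer, observe the realised (capacity-normalised) generation, and
    take one subgradient step. Penalty and step-size values are
    illustrative assumptions."""
    offer = 0.5
    offers = []
    for actual in actuals:
        offers.append(offer)
        offer -= lr * pinball_grad(offer, actual, psi_plus, psi_minus)
        offer = min(max(offer, 0.0), 1.0)  # keep within [0, capacity]
    return offers
```

Because each period performs a single constant-cost update, the approach tracks drifting generation statistics with the "minimal computational burden" the abstract highlights.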
Ellipsometry techniques allow the measurement of a material's polarization information, requiring precise rotations of optical components with different light and sensor configurations. This leads to cumbersome capture devices, carefully calibrated in lab conditions, and to very long acquisition times, typically on the order of a few days per object. Recent techniques allow the capture of polarimetric spatially-varying reflectance information, but are limited to a single view, or cover all view directions but are limited to spherical objects made of a single homogeneous material. We present sparse ellipsometry, a portable polarimetric acquisition method that simultaneously captures both polarimetric SVBRDF and 3D shape. Our handheld device consists of off-the-shelf, fixed optical components. Instead of days, the total acquisition time per object is on the order of twenty minutes. We develop a complete polarimetric SVBRDF model that includes diffuse and specular components as well as single scattering, and devise a novel polarimetric inverse rendering algorithm with data augmentation of specular reflection samples via generative modeling. Our results show strong agreement with a recent ground-truth dataset of polarimetric BRDFs of real-world objects.
The field of automatic biomedical image analysis crucially depends on robust and meaningful performance metrics for algorithm validation. Current metric usage, however, is often ill-informed and does not reflect the underlying domain interest. Here, we present a comprehensive framework that guides researchers towards choosing performance metrics in a problem-aware manner. Specifically, we focus on biomedical image analysis problems that can be interpreted as classification tasks at image, object, or pixel level. The framework first compiles domain interest-, target structure-, dataset-, and algorithm output-related properties of a given problem into a problem fingerprint, while mapping it to the appropriate problem category, namely image-level classification, semantic segmentation, instance segmentation, or object detection. It then guides users through the process of selecting and applying a set of appropriate validation metrics while making them aware of potential pitfalls related to individual choices. In this paper, we describe the current status of the Metrics Reloaded recommendation framework, with the goal of obtaining constructive feedback from the image analysis community. The current version was developed within an international consortium of more than 60 image analysis experts and will be made openly available as a user-friendly toolkit after community-driven optimization.
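The mapping from a problem fingerprint to one of the four problem categories can be sketched with two coarse fingerprint fields. The field names and rules below are simplified assumptions for illustration; the actual Metrics Reloaded framework compiles a much richer property set:

```python
def problem_category(output_level: str, distinguishes_instances: bool) -> str:
    """Map a coarse problem fingerprint to one of the four problem
    categories named in the framework.

    output_level: level at which the algorithm classifies
                  ("image", "object", or "pixel").
    distinguishes_instances: whether individual object instances
                             must be told apart."""
    if output_level == "image":
        return "image-level classification"
    if output_level == "object":
        return "object detection"
    if output_level == "pixel":
        if distinguishes_instances:
            return "instance segmentation"
        return "semantic segmentation"
    raise ValueError(f"unknown output level: {output_level!r}")
```

Once the category is fixed, a recommendation framework of this kind can restrict the candidate metric pool to those valid for that category and then warn about pitfalls of each remaining choice.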
The strong few-shot in-context learning capability of large pre-trained language models (PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine, which feature high and diverse demands of language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical information extraction tasks, named entity recognition and relation extraction. We follow the true few-shot setting to avoid overestimating models' few-shot performance by model selection over a large validation set. We also optimize GPT-3's performance with known techniques such as contextual calibration and dynamic in-context example retrieval. However, our results show that GPT-3 still significantly underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 in-context learning also yields smaller gains in accuracy when more training data becomes available. Our in-depth analyses further reveal issues of the in-context learning setting that may be detrimental to information extraction tasks in general. Given the high cost of experimenting with GPT-3, we hope our study provides guidance for biomedical researchers and practitioners towards more promising directions such as fine-tuning small PLMs.
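Of the techniques mentioned, contextual calibration (Zhao et al., 2021) is easy to sketch: estimate the model's label bias from a content-free input and rescale predictions by its inverse. The sketch below is a minimal version of that idea, not the paper's full GPT-3 pipeline:

```python
import numpy as np

def contextual_calibration(label_probs_content_free, label_probs):
    """Rescale a model's label probabilities by the inverse of the bias
    it shows on a content-free input (e.g. "N/A") under the same prompt.

    label_probs_content_free: label distribution on the content-free input.
    label_probs: label distribution on the real input.
    Returns the calibrated, renormalised distribution."""
    W = np.diag(1.0 / np.asarray(label_probs_content_free, dtype=float))
    calibrated = W @ np.asarray(label_probs, dtype=float)
    return calibrated / calibrated.sum()
```

For instance, if the prompt alone biases the model towards the first label, calibration discounts that label and can flip an otherwise biased prediction.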
Recent neural architecture search (NAS) solutions have produced impressive results by training super-networks and then deriving subnetworks, a.k.a. child models, from a predefined search space that outperform expert-crafted models. Efficient and robust subnetworks can be selected for resource-constrained edge devices, allowing them to perform well in the wild. However, constructing super-networks for arbitrary architectures remains a challenge, which often prevents the adoption of these approaches. To address this challenge, we present BootstrapNAS, a software framework for the automatic generation of super-networks for NAS. BootstrapNAS takes a pre-trained model from a popular architecture, e.g., ResNet-50, or from a valid custom design, automatically creates a super-network from it, and then uses state-of-the-art NAS techniques to train the super-network, resulting in subnetworks that significantly outperform the given pre-trained model. We demonstrate the solution by generating super-networks from arbitrary model repositories, and make the resulting super-networks available for reproducibility of the results.